A Broad Class of Discrete-Time Hypercomplex-Valued Hopfield Neural Networks
In this paper, we address the stability of a broad class of discrete-time
hypercomplex-valued Hopfield-type neural networks. To ensure that the neural
networks in this class always settle at a stationary state, we
introduce novel hypercomplex number systems referred to as real-part
associative hypercomplex number systems. Real-part associative hypercomplex
number systems generalize the well-known Cayley-Dickson algebras and real
Clifford algebras and include the systems of real numbers, complex numbers,
dual numbers, hyperbolic numbers, quaternions, tessarines, and octonions as
particular instances. Apart from the novel hypercomplex number systems, we
introduce a family of hypercomplex-valued activation functions called
$\mathcal{B}$-projection functions. Broadly speaking, a $\mathcal{B}$-projection
function projects the activation potential onto the
set of all possible states of a hypercomplex-valued neuron. Using the theory
presented in this paper, we confirm the stability analysis of several
discrete-time hypercomplex-valued Hopfield-type neural networks from the
literature. Moreover, we introduce and provide the stability analysis of a
general class of Hopfield-type neural networks on Cayley-Dickson algebras.
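To make the update rule concrete, here is a minimal NumPy sketch of a discrete-time quaternion-valued Hopfield-type network in which normalization onto the unit quaternions plays the role of the projection function described above. The function names, the synchronous update schedule, and the unit-quaternion state set are illustrative assumptions, not the paper's exact model.

```python
import numpy as np

def quat_mult(p, q):
    """Hamilton product of two quaternions given as 4-vectors (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def project_to_unit(q, eps=1e-12):
    """Project the activation potential onto the unit quaternions, the set
    of admissible neuron states in this toy model."""
    n = np.linalg.norm(q)
    return q / n if n > eps else q

def hopfield_step(W, x):
    """One synchronous update: activation potential followed by projection.
    W has shape (N, N, 4); W[i, j] is the quaternionic weight from j to i.
    x has shape (N, 4); each row is one neuron's quaternionic state."""
    N = x.shape[0]
    new_x = np.empty_like(x)
    for i in range(N):
        potential = sum(quat_mult(W[i, j], x[j]) for j in range(N))
        new_x[i] = project_to_unit(potential)
    return new_x
```

Iterating hopfield_step from a probe state until the state stops changing yields the stationary behavior the paper's conditions guarantee; the quaternions used here are one instance of a real-part associative hypercomplex number system.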
Understanding Vector-Valued Neural Networks and Their Relationship with Real and Hypercomplex-Valued Neural Networks
Despite the many successful applications of deep learning models for
multidimensional signal and image processing, most traditional neural networks
process data represented by (multidimensional) arrays of real numbers. The
intercorrelation between feature channels is usually expected to be learned
from the training data, requiring numerous parameters and careful training. In
contrast, vector-valued neural networks are conceived to process arrays of
vectors and naturally consider the intercorrelation between feature channels.
Consequently, they usually have fewer parameters and often undergo more robust
training than traditional neural networks. This paper aims to present a broad
framework for vector-valued neural networks, referred to as V-nets. In this
context, hypercomplex-valued neural networks are regarded as vector-valued
models with additional algebraic properties. Furthermore, this paper explains
the relationship between vector-valued and traditional neural networks.
Precisely, a vector-valued neural network can be obtained by placing
restrictions on a real-valued model to consider the intercorrelation between
feature channels. Finally, we show how V-nets, including hypercomplex-valued
neural networks, can be implemented in current deep-learning libraries as
real-valued networks.
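As an illustration of such a restriction, the following PyTorch sketch realizes a quaternion-valued linear layer as a real-valued one whose block structure is tied to the Hamilton product. The class name QuaternionLinear and the stacked component layout are assumptions made for this example, not an API from the paper.

```python
import torch
import torch.nn as nn

class QuaternionLinear(nn.Module):
    """A quaternion-valued linear layer realized as a real-valued one.

    The (4*out) x (4*in) real weight matrix is built from only four free
    blocks (a, b, c, d), arranged according to the Hamilton product -- the
    "restriction on a real-valued model" mentioned in the abstract."""

    def __init__(self, in_features, out_features):
        super().__init__()
        shape = (out_features, in_features)
        self.a = nn.Parameter(torch.randn(shape) * 0.1)
        self.b = nn.Parameter(torch.randn(shape) * 0.1)
        self.c = nn.Parameter(torch.randn(shape) * 0.1)
        self.d = nn.Parameter(torch.randn(shape) * 0.1)

    def forward(self, x):
        # x has shape (batch, 4 * in_features), with the quaternion
        # components stacked as (w, x, y, z) blocks.
        a, b, c, d = self.a, self.b, self.c, self.d
        W = torch.cat([
            torch.cat([a, -b, -c, -d], dim=1),
            torch.cat([b,  a, -d,  c], dim=1),
            torch.cat([c,  d,  a, -b], dim=1),
            torch.cat([d, -c,  b,  a], dim=1),
        ], dim=0)
        return x @ W.t()
```

Since only a, b, c, and d are trainable, the layer has one quarter of the parameters of an unconstrained real linear layer acting on the same stacked representation, which is the kind of parameter saving the abstract describes.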
An Introduction to Quaternion-Valued Recurrent Projection Neural Networks
Hypercomplex-valued neural networks, including quaternion-valued neural
networks, can treat multi-dimensional data as a single entity. In this paper,
we introduce the quaternion-valued recurrent projection neural networks
(QRPNNs). Briefly, QRPNNs are obtained by combining non-local projection
learning with quaternion-valued recurrent correlation neural networks
(QRCNNs). We show that QRPNNs overcome the cross-talk problem of QRCNNs and
are thus suitable for implementing associative memories. Furthermore,
computational experiments reveal that QRPNNs exhibit greater storage capacity
and noise tolerance than their corresponding QRCNNs.
Comment: Accepted to be published in: Proceedings of the 8th Brazilian
Conference on Intelligent Systems (BRACIS 2019), October 15-18, 2019,
Salvador, BA, Brazil.
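The following real-valued (bipolar) sketch conveys the idea behind projection recall; it is a stand-in for the quaternionic model, with the dual-basis construction and the exponential excitation function chosen here for illustration only.

```python
import numpy as np

def rpnn_recall(U, x, alpha=5.0, n_steps=10):
    """Loose real-valued (bipolar) sketch of recurrent projection recall.

    U : (N, P) matrix whose columns are stored patterns in {-1, +1}.
    x : (N,) probe vector.

    A recurrent correlation network weights each stored pattern by an
    excitation function of its similarity to the current state. The
    projection variant measures similarity against the dual basis
    V = U (U^T U)^{-1}, so each stored pattern excites mainly itself,
    which suppresses cross-talk between correlated patterns."""
    V = U @ np.linalg.inv(U.T @ U)   # dual (projection) basis: V^T U = I
    for _ in range(n_steps):
        sim = V.T @ x                # ~1 for the matching pattern, ~0 otherwise
        w = np.exp(alpha * sim)      # exponential excitation function
        x = np.sign(U @ w)           # superpose patterns, threshold to bipolar
    return x
```

When the stored patterns are mutually orthogonal, the dual basis is proportional to U itself and the sketch reduces to plain recurrent correlation recall, which is why the cross-talk problem only appears for correlated patterns.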
Extending the Universal Approximation Theorem for a Broad Class of Hypercomplex-Valued Neural Networks
The universal approximation theorem asserts that a neural network with a single
hidden layer can approximate continuous functions on compact sets to any desired
precision. As an existential result, the universal approximation theorem supports
the use of neural networks for various applications, including regression and
classification tasks. The universal approximation theorem is not limited to
real-valued neural networks but also holds for complex, quaternion, tessarines,
and Clifford-valued neural networks. This paper extends the universal
approximation theorem for a broad class of hypercomplex-valued neural networks.
Precisely, we first introduce the concept of non-degenerate hypercomplex
algebra. Complex numbers, quaternions, and tessarines are examples of
non-degenerate hypercomplex algebras. Then, we state the universal
approximation theorem for hypercomplex-valued neural networks defined on a
non-degenerate hypercomplex algebra.
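As one concrete instance covered by the theorem, the sketch below builds a single-hidden-layer complex-valued network with a split-type activation; all shapes, the helper names, and the choice of tanh are illustrative assumptions.

```python
import numpy as np

def split_tanh(z):
    """Split-type activation: apply tanh to the real and imaginary parts."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def complex_mlp(x, W1, b1, W2, b2):
    """Single-hidden-layer complex-valued network of the kind the theorem
    covers: hypercomplex affine map, nonlinearity, affine readout."""
    return W2 @ split_tanh(W1 @ x + b1) + b2

# Hypothetical shapes: 2 complex inputs, 8 hidden neurons, 1 complex output.
rng = np.random.default_rng(0)
W1 = rng.normal(size=(8, 2)) + 1j * rng.normal(size=(8, 2))
b1 = rng.normal(size=8) + 1j * rng.normal(size=8)
W2 = rng.normal(size=(1, 8)) + 1j * rng.normal(size=(1, 8))
b2 = rng.normal(size=1) + 1j * rng.normal(size=1)
y = complex_mlp(rng.normal(size=2) + 1j * rng.normal(size=2), W1, b1, W2, b2)
```

The same architecture works verbatim over quaternions or tessarines by swapping in that algebra's multiplication, since both are non-degenerate hypercomplex algebras in the sense of the abstract.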